Situational Awareness
The bullish case is made by Leopold Aschenbrenner in his 160-page e-book Situational Awareness. Using straight lines on a graph, he argues that superintelligence is possible – likely, even – by 2027, and that even if he’s off by a few years, we are clearly headed in that direction. He explores the consequences for geopolitical rivalry and national defense, and argues that AI labs must take strict security precautions immediately. (summarized by Zvi)
I find it an insane proposition that the US government would let a random SF startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise.
Max Read has a more detailed takedown, but be careful: Max Read takes down a lot of people, especially anyone to the right of Mao.